- Remote patient monitoring (RPM) is the use of digital technologies to improve patient care at a distance. However, current RPM solutions are often biased toward tech-savvy patients. To foster health equity, researchers have studied how to address the socio-economic and cognitive needs of diverse patient groups, but their emotional needs have remained largely neglected. We perform the first qualitative study to explore the emotional needs of diverse patients around RPM. Specifically, we conduct a thematic analysis of 18 interviews and 4 focus groups at a large US healthcare organization. We identify emotional needs that lead to four emotional tensions within and across stakeholder groups when applying an equity focus to the design and implementation of RPM technologies. The four emotional tensions are making diverse patients feel: (i) heard vs. exploited; (ii) seen vs. deprioritized for efficiency; (iii) empowered vs. anxious; and (iv) cared for vs. detached from care. To manage these emotional tensions across stakeholders, we develop design recommendations informed by a paradox mindset (i.e., both-and rather than either-or strategies).
  Free, publicly-accessible full text available May 2, 2026
- Free, publicly-accessible full text available March 1, 2026
- Background: Laypeople have easy access to health information through large language models (LLMs), such as ChatGPT, and search engines, such as Google. Search engines transformed health information access, and LLMs offer a new avenue for answering laypeople's questions. Objective: We aimed to compare the frequency of use of and attitudes toward LLMs and search engines, as well as their comparative relevance, usefulness, ease of use, and trustworthiness in responding to health queries. Methods: We conducted a screening survey to compare the demographics of LLM users and nonusers seeking health information, analyzing results with logistic regression. LLM users from the screening survey were invited to a follow-up survey to report the types of health information they sought. We compared the frequency of use of LLMs and search engines using ANOVA and Tukey post hoc tests. Lastly, paired-sample Wilcoxon tests compared LLMs and search engines on perceived usefulness, ease of use, trustworthiness, feelings, bias, and anthropomorphism. Results: In total, 2002 US participants recruited on Prolific took the screening survey about the use of LLMs and search engines. Of these, 52% (n=1045) were female, with a mean age of 39 (SD 13) years. Participants were 9.7% (n=194) Asian, 12.1% (n=242) Black, 73.3% (n=1467) White, 1.1% (n=22) Hispanic, and 3.8% (n=77) of other races and ethnicities. Further, 1913 (95.6%) used search engines to look up health queries versus 642 (32.6%) for LLMs. Men had higher odds (odds ratio [OR] 1.63, 95% CI 1.34-1.99; P<.001) of using LLMs for health questions than women. Black (OR 1.90, 95% CI 1.42-2.54; P<.001) and Asian (OR 1.66, 95% CI 1.19-2.30; P<.01) individuals had higher odds than White individuals. Those with excellent perceived health (OR 1.46, 95% CI 1.1-1.93; P=.01) were more likely to use LLMs than those with good health. Higher technical proficiency increased the likelihood of LLM use (OR 1.26, 95% CI 1.14-1.39; P<.001). In a follow-up survey of 281 participants who had used LLMs for health information, most used search engines first (n=174, 62%) to answer health questions, but the second most common first source consulted was LLMs (n=39, 14%). LLMs were perceived as less useful (P<.01) and less relevant (P=.07), but elicited fewer negative feelings (P<.001), appeared more human (LLM: n=160 vs search: n=32), and were seen as less biased (P<.001). Trust (P=.56) and ease of use (P=.27) showed no differences. Conclusions: Search engines are the primary source of health information, yet positive perceptions of LLMs suggest growing use. Future work could explore whether LLM trust and usefulness are enhanced by supplementing answers with external references and limiting persuasive language to curb overreliance. Collaboration with health organizations can help improve the quality of LLMs' health output.
  Free, publicly-accessible full text available January 1, 2026
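The screening and follow-up analyses above (logistic regression for who uses LLMs, paired Wilcoxon tests for LLM-vs-search perceptions) could be sketched in Python roughly as follows. The file names and column names (used_llm, llm_usefulness, etc.) are hypothetical placeholders, not the study's actual variables.

```python
# Minimal analysis sketch, assuming hypothetical file and column names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import wilcoxon

# Screening survey: logistic regression of LLM use on demographics and traits.
# Odds ratios are the exponentiated coefficients.
screen = pd.read_csv("screening_survey.csv")  # hypothetical file
logit = smf.logit(
    "used_llm ~ C(gender) + C(race) + C(perceived_health) + tech_proficiency",
    data=screen,
).fit()
print(np.exp(logit.params))  # odds ratios

# Follow-up survey: paired-sample Wilcoxon signed-rank tests comparing each
# respondent's ratings of LLMs vs search engines on the same measure.
followup = pd.read_csv("followup_survey.csv")  # hypothetical file
for measure in ["usefulness", "ease_of_use", "trustworthiness"]:
    stat, p = wilcoxon(followup[f"llm_{measure}"], followup[f"search_{measure}"])
    print(f"{measure}: W={stat:.1f}, p={p:.3f}")
```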
- Importance: Virtual patient-physician communications have increased since 2020 and negatively impacted primary care physician (PCP) well-being. Generative artificial intelligence (GenAI) drafts of patient messages could potentially reduce health care professional (HCP) workload and improve communication quality, but only if the drafts are considered useful. Objectives: To assess PCPs' perceptions of GenAI drafts and to examine linguistic characteristics associated with equity and perceived empathy. Design, Setting, and Participants: This cross-sectional quality improvement study tested the hypothesis that PCPs' ratings of GenAI drafts (created using the electronic health record [EHR] standard prompts) would be equivalent to HCP-generated responses on 3 dimensions. The study was conducted at NYU Langone Health using private patient-HCP communications at 3 internal medicine practices piloting GenAI. Exposures: Randomly assigned patient messages coupled with either an HCP message or the draft GenAI response. Main Outcomes and Measures: PCPs rated each response's information content quality (eg, relevance) and communication quality (eg, verbosity) on Likert scales, and indicated whether they would use the draft or start anew (usable vs unusable). Branching logic further probed for empathy, personalization, and professionalism of responses. Computational linguistics methods assessed content differences in HCP vs GenAI responses, focusing on equity and empathy. Results: A total of 16 PCPs (8 [50.0%] female) reviewed 344 messages (175 GenAI drafted; 169 HCP drafted). Both GenAI and HCP responses were rated favorably. GenAI responses were rated higher for communication style than HCP responses (mean [SD], 3.70 [1.15] vs 3.38 [1.20]; P = .01; U = 12 568.5) but were similar to HCPs on information content (mean [SD], 3.53 [1.26] vs 3.41 [1.27]; P = .37; U = 13 981.0) and usable draft proportion (mean [SD], 0.69 [0.48] vs 0.65 [0.47]; P = .49; t = −0.6842). Usable GenAI responses were considered more empathetic than usable HCP responses (32 of 86 [37.2%] vs 13 of 79 [16.5%]; difference, 125.5%), possibly attributable to more subjective (mean [SD], 0.54 [0.16] vs 0.31 [0.23]; P < .001; difference, 74.2%) and positive (mean [SD] polarity, 0.21 [0.14] vs 0.13 [0.25]; P = .02; difference, 61.5%) language; they were also numerically longer (mean [SD] word count, 90.5 [32.0] vs 65.4 [62.6]; difference, 38.4%), although this difference was not statistically significant (P = .07), and more linguistically complex (mean [SD] score, 125.2 [47.8] vs 95.4 [58.8]; P = .002; difference, 31.2%). Conclusions: In this cross-sectional study of PCP perceptions of an EHR-integrated GenAI chatbot, GenAI was found to communicate information better and with more empathy than HCPs, highlighting its potential to enhance patient-HCP communication. However, GenAI drafts were less readable than HCPs', a significant concern for patients with low health or English literacy.
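The computational-linguistics comparison above (subjectivity, polarity, length) could be approximated as in the sketch below. The abstract does not name the tools used, so TextBlob, the Mann-Whitney U comparison, and all file and column names are assumptions made here for illustration.

```python
# Sketch of a GenAI-vs-HCP linguistic comparison; data layout is hypothetical.
import pandas as pd
from textblob import TextBlob
from scipy.stats import mannwhitneyu

# Hypothetical file with one drafted response per row: columns 'text' and
# 'source' (either "genai" or "hcp").
drafts = pd.read_csv("draft_responses.csv")
drafts["subjectivity"] = drafts["text"].apply(lambda t: TextBlob(t).sentiment.subjectivity)
drafts["polarity"] = drafts["text"].apply(lambda t: TextBlob(t).sentiment.polarity)
drafts["word_count"] = drafts["text"].str.split().str.len()

# Compare the two groups of messages on each linguistic measure.
for col in ["subjectivity", "polarity", "word_count"]:
    genai = drafts.loc[drafts["source"] == "genai", col]
    hcp = drafts.loc[drafts["source"] == "hcp", col]
    u, p = mannwhitneyu(genai, hcp, alternative="two-sided")
    print(f"{col}: U={u:.1f}, p={p:.3f}")
```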
- The COVID-19 pandemic has boosted digital health utilization, raising concerns about increases in physicians' after-hours clinical work ("work-outside-work"). The surge in patients' digital messages and additional time spent on work-outside-work by telemedicine providers underscores the need to evaluate the connection between digital health utilization and physicians' after-hours commitments. We examined the impact on physicians' workload of two types of digital demands: patients' messages requesting medical advice (PMARs) sent to physicians' inbox (inbasket), and telemedicine. Our study included 1716 ambulatory-care physicians in New York City regularly practicing between November 2022 and March 2023. Regression analyses assessed primary and interaction effects of PMARs and telemedicine on work-outside-work. The study revealed a significant effect of PMARs on physicians' work-outside-work and that this relationship is moderated by physicians' specialties. Non-primary care physicians or specialists experienced a more pronounced effect than their primary care peers. Analysis of their telemedicine load revealed that primary care physicians received fewer PMARs and spent less time on work-outside-work as telemedicine increased. Specialists faced increased PMARs and did more work-outside-work as telemedicine visits increased, which could be due to differences in patient panels. Reducing PMAR volumes and efficient inbasket management strategies are needed to reduce physicians' work-outside-work. Policymakers need to be cognizant of potential disruptions to physicians' carefully balanced workloads caused by digital health services.
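A minimal sketch of the kind of moderated regression described above is shown below, with hypothetical variable names; the abstract does not specify the exact model form, so this is illustrative only.

```python
# Sketch: work-outside-work hours regressed on PMAR volume and telemedicine load,
# with physician specialty as a moderator (interaction terms). Variable names are
# hypothetical placeholders, not the study's actual measures.
import pandas as pd
import statsmodels.formula.api as smf

physicians = pd.read_csv("physician_panel.csv")  # hypothetical file
model = smf.ols(
    "work_outside_work_hours ~ pmar_volume * C(specialty) + telemedicine_visits * C(specialty)",
    data=physicians,
).fit()
print(model.summary())  # interaction terms capture specialty-specific effects
```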
- Background: Chatbots are being piloted to draft responses to patient questions, but patients' ability to distinguish between provider and chatbot responses and patients' trust in chatbots' functions are not well established. Objective: This study aimed to assess the feasibility of using ChatGPT (Chat Generative Pre-trained Transformer) or a similar artificial intelligence–based chatbot for patient-provider communication. Methods: A survey study was conducted in January 2023. Ten representative, nonadministrative patient-provider interactions were extracted from the electronic health record. Patients' questions were entered into ChatGPT with a request for the chatbot to respond using approximately the same word count as the human provider's response. In the survey, each patient question was followed by a provider- or ChatGPT-generated response. Participants were informed that 5 responses were provider generated and 5 were chatbot generated. Participants were asked, and financially incentivized, to correctly identify the response source. Participants were also asked about their trust in chatbots' functions in patient-provider communication, using a Likert scale from 1 to 5. Results: A US-representative sample of 430 study participants aged 18 and older was recruited on Prolific, a crowdsourcing platform for academic studies. In all, 426 participants filled out the full survey. After removing participants who spent less than 3 minutes on the survey, 392 respondents remained. Overall, 53.3% (209/392) of respondents analyzed were women, and the average age was 47.1 (range 18-91) years. The correct classification of responses ranged from 49% (192/392) to 85.7% (336/392) across questions. On average, chatbot responses were identified correctly in 65.5% (1284/1960) of the cases, and human provider responses were identified correctly in 65.1% (1276/1960) of the cases. On average, trust in chatbots' functions was weakly positive (mean Likert score 3.4 out of 5), with lower trust as the health-related complexity of the task in the question increased. Conclusions: ChatGPT responses to patient questions were weakly distinguishable from provider responses. Laypeople appear to trust the use of chatbots to answer lower-risk health questions. It is important to continue studying patient-chatbot interaction as chatbots move from administrative to more clinical roles in health care.
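As a rough illustration of how the identification accuracy above might be computed, the sketch below assumes one row per participant-question guess; the file and column names are hypothetical.

```python
# Sketch: per-question and per-source identification accuracy from survey guesses.
# Columns are hypothetical: question_id, true_source ("chatbot"/"provider"), guessed_source.
import pandas as pd

answers = pd.read_csv("classification_answers.csv")  # hypothetical file
answers["correct"] = answers["true_source"] == answers["guessed_source"]

print(answers.groupby("question_id")["correct"].mean())  # accuracy per question
print(answers.groupby("true_source")["correct"].mean())  # accuracy by response source
print(answers["correct"].mean())                         # overall accuracy
```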
- Background: Remote patient monitoring (RPM) technologies can support patients living with chronic conditions through self-monitoring of physiological measures and enhance clinicians' diagnostic and treatment decisions. However, to date, large-scale pragmatic RPM implementation within health systems has been limited, and understanding of the impacts of RPM technologies on clinical workflows and care experience is lacking. Objective: In this study, we evaluate the early implementation of operational RPM initiatives for chronic disease management within the ambulatory network of an academic medical center in New York City, focusing on the experiences of "early adopter" clinicians and patients. Methods: Using a multimethod qualitative approach, we conducted (1) interviews with 13 clinicians across 9 specialties considered as early adopters and supporters of RPM and (2) speculative design sessions exploring the future of RPM in clinical care with 21 patients and patient representatives, to better understand experiences, preferences, and expectations of pragmatic RPM use for health care delivery. Results: We identified themes relevant to RPM implementation within the following areas: (1) data collection and practices, including impacts of taking real-world measures and issues of data sharing, security, and privacy; (2) proactive and preventive care, including proactive and preventive monitoring, and proactive interventions and support; and (3) health disparities and equity, including tailored and flexible care and implicit bias. We also identified evidence for mitigation and support to address challenges in each of these areas. Conclusions: This study highlights the unique contexts, perceptions, and challenges regarding the deployment of RPM in clinical practice, including its potential implications for clinical workflows and work experiences. Based on these findings, we offer implementation and design recommendations for health systems interested in deploying RPM-enabled health care.
- The COVID-19 pandemic accelerated the adoption of remote patient monitoring technology, which offers exciting opportunities for expanded connected care at a distance. However, while the mode of clinicians' interactions with patients and their health data has transformed, the larger framework of how we deliver care is still driven by a model of episodic care that does not facilitate this new frontier. Fully realizing a transformation to a system of continuous connected care augmented by remote monitoring technology will require a shift in clinicians' and health systems' approach to care delivery technology and its associated data volume and complexity. In this article, we present a solution that organizes and optimizes the interaction of automated technologies with human oversight, allowing for the maximal use of data-rich tools while preserving the pieces of medical care considered uniquely human. We review implications of this "augmented continuous connected care" model of remote patient monitoring for clinical practice and offer human-centered design-informed next steps to encourage innovation around these important issues.